The goal of the ACM MMSports2022 DeepSportRadar Instance Segmentation Challenge is to segment individual humans, including players, coaches, and referees, on a basketball court. The main characteristics of this challenge are that occlusion between players is very high and the amount of data is very limited. To address these issues, we designed a robust instance segmentation pipeline. First, we adopted a data augmentation strategy appropriate for this task, mainly consisting of photometric distortion transforms and a copy-paste strategy, which can generate more image instances with a wider distribution. Second, we employed a strong segmentation model: a Hybrid Task Cascade-based detector on a Swin-Base-based CBNetV2 backbone, and added a MaskIoU head to the HTCMaskHead, which can simply and effectively improve instance segmentation performance. Finally, the SWA training strategy was applied to further boost performance. Experimental results show that the proposed pipeline achieves competitive results in the DeepSportRadar challenge, with 0.768 AP@0.50:0.95 on the challenge set. Source code is available at https://github.com/yjingyu/instanc_segentation_pro.
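A minimal sketch of the copy-paste augmentation idea the abstract mentions, using 2-D lists of pixel values as stand-ins for images (the names and shapes here are illustrative; the actual pipeline pastes annotated instance masks between training images):

```python
def copy_paste(dst_img, src_img, src_mask, top, left):
    """Paste the masked region of src_img onto dst_img at (top, left).

    Images are 2-D lists of pixel values; src_mask is a same-shaped
    2-D list of 0/1 flags marking the source instance's pixels.
    """
    out = [row[:] for row in dst_img]  # copy so dst_img stays untouched
    for i, mask_row in enumerate(src_mask):
        for j, m in enumerate(mask_row):
            y, x = top + i, left + j
            if m and 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = src_img[i][j]  # copy only masked pixels
    return out

# Paste a 2x2 "player" patch into a 4x4 background.
bg = [[0] * 4 for _ in range(4)]
patch = [[7, 7], [7, 7]]
mask = [[1, 0], [1, 1]]
aug = copy_paste(bg, patch, mask, top=1, left=1)
```

Repeating this with randomly chosen instances and positions yields the "more image instances with a wider distribution" the abstract describes.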
In the field of semi-supervised learning, graph convolutional networks (GCNs), as a variant of GNNs, achieve promising results on non-Euclidean data by introducing convolution into GNNs. However, GCNs and their variant models cannot safely use the information of risky unlabeled data, which degrades the performance of semi-supervised learning. Therefore, we propose a Safe GCN framework (Safe-GCN) to improve the learning performance. In Safe-GCN, we design an iterative process to label the unlabeled data. In each iteration, a GCN and its supervised version (S-GCN) are learned to find unlabeled data with high confidence. The high-confidence unlabeled data and their pseudo-labels are then added to the label set. Finally, both the added unlabeled data and the labeled data are used to train an S-GCN, which can safely explore the risky unlabeled data and enables the safe use of abundant unlabeled data. The performance of Safe-GCN is evaluated on three well-known citation network datasets, and the obtained results demonstrate the effectiveness of this framework against several graph-based semi-supervised learning methods.
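The iterative labeling step can be sketched as follows. This is a simplified, model-agnostic version under stated assumptions: `predict` is a hypothetical stand-in for the agreement of the GCN and its supervised version S-GCN, and the threshold value is illustrative:

```python
def pseudo_label_iteration(labeled, unlabeled, predict, threshold=0.9):
    """One iteration: move confidently predicted unlabeled nodes,
    together with their pseudo-labels, into the labeled set."""
    still_unlabeled = []
    for node in unlabeled:
        label, conf = predict(node)
        if conf >= threshold:            # high confidence: trust the pseudo-label
            labeled.append((node, label))
        else:                            # risky: keep it out of the label set
            still_unlabeled.append(node)
    return labeled, still_unlabeled

# Toy predictor: even-numbered nodes are class 0 with high confidence.
predict = lambda n: (n % 2, 0.95 if n % 2 == 0 else 0.6)
labeled, rest = pseudo_label_iteration([(1, 1)], [2, 3, 4], predict)
```

Repeating this loop, and finally training S-GCN on the enlarged label set, is the core of the framework the abstract describes.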
Device-free human gesture recognition has gained popularity due to the device-free, privacy-preserving, and wide-coverage nature of RF signals. However, neural network models trained for recognition with data collected from a specific domain suffer significant performance degradation when applied to a new domain. To tackle this challenge, we propose an unsupervised domain adaptation framework for device-free gesture recognition by making effective use of unlabeled target-domain data. Specifically, we apply pseudo-labeling and consistency regularization, with elaborate designs on the target-domain data, to produce pseudo-labels and align the instance features of the target domain. We then design two data augmentation methods based on randomly erasing the input data to enhance the robustness of the model. In addition, we apply a confidence-control constraint to tackle the overconfidence problem. We conduct extensive experiments on a public WiFi dataset and a public millimeter-wave radar dataset. The experimental results demonstrate the superior effectiveness of the proposed framework.
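A minimal sketch of the random-erasing idea, applied to a 1-D sequence as a stand-in for an RF input frame (span lengths and the fill value are illustrative assumptions, not the paper's settings):

```python
import random

def random_erase(seq, max_len=4, fill=0.0, rng=random):
    """Erase (overwrite with `fill`) a random contiguous span of the
    input sequence, returning a new augmented copy."""
    out = list(seq)
    span = rng.randint(1, max_len)                  # erased span length
    start = rng.randint(0, max(0, len(out) - span))  # span position
    for i in range(start, start + span):
        out[i] = fill
    return out

rng = random.Random(0)  # seeded for reproducibility
x = [1.0] * 8
y = random_erase(x, rng=rng)
```

Training the model to classify such partially erased inputs consistently with the originals is one way the robustness described in the abstract can be encouraged.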
Human gesture recognition using millimeter-wave (mmWave) signals provides attractive applications, including smart home and in-car interfaces. While existing works achieve promising performance under controlled settings, practical applications are still limited by the need for intensive data collection, extra training effort when adapting to new domains (i.e., environments, persons, and locations), and poor performance in real-time recognition. In this paper, we propose DI-Gesture, a domain-independent and real-time mmWave gesture recognition system. Specifically, we first derive the signal variation corresponding to human gestures with spatial-temporal processing. To enhance the robustness of the system and reduce the data collection effort, we design a data augmentation framework based on the correlation between signal patterns and gesture variations. Furthermore, we propose a dynamic window mechanism to perform gesture segmentation automatically and accurately, thus enabling real-time recognition. Finally, we build a lightweight neural network to extract spatial information from the data for gesture classification. Extensive experimental results show that DI-Gesture achieves an average accuracy of 97.92%, 99.18%, and 98.76% for new users, environments, and locations, respectively. In real-time scenarios, the accuracy of DI-Gesture reaches over 97% with an average inference time of 2.87 ms, demonstrating the superior robustness and effectiveness of our system.
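One plausible shape for a dynamic-window segmenter like the one the abstract mentions: open a window when per-frame signal energy rises above a start threshold, and close it after a run of quiet frames. The thresholds, the quiet-frame count, and the energy feature itself are illustrative assumptions, not the paper's design:

```python
def dynamic_window(energies, start_thr=0.5, end_thr=0.2, min_quiet=2):
    """Return (start, end) frame index pairs of detected gesture segments."""
    segments, start, quiet = [], None, 0
    for i, e in enumerate(energies):
        if start is None:
            if e > start_thr:            # energy burst: gesture begins
                start = i
        else:
            quiet = quiet + 1 if e < end_thr else 0
            if quiet >= min_quiet:       # sustained quiet: gesture ended
                segments.append((start, i - min_quiet + 1))
                start, quiet = None, 0
    if start is not None:                # gesture still in progress at end
        segments.append((start, len(energies)))
    return segments

e = [0.1, 0.1, 0.9, 0.8, 0.7, 0.1, 0.05, 0.1, 0.1]
segs = dynamic_window(e)
```

Because each frame is processed once as it arrives, this style of segmentation is compatible with the real-time operation the system targets.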
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built based on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain and user tweet features as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing experiment results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
Learning feature interactions is the key to success for the large-scale CTR prediction and recommendation. In practice, handcrafted feature engineering usually requires exhaustive searching. In order to reduce the high cost of human efforts in feature engineering, researchers propose several deep neural networks (DNN)-based approaches to learn the feature interactions in an end-to-end fashion. However, existing methods either do not learn both vector-wise interactions and bit-wise interactions simultaneously, or fail to combine them in a controllable manner. In this paper, we propose a new model, xDeepInt, based on a novel network architecture called polynomial interaction network (PIN) which learns higher-order vector-wise interactions recursively. By integrating subspace-crossing mechanism, we enable xDeepInt to balance the mixture of vector-wise and bit-wise feature interactions at a bounded order. Based on the network architecture, we customize a combined optimization strategy to conduct feature selection and interaction selection. We implement the proposed model and evaluate the model performance on three real-world datasets. Our experiment results demonstrate the efficacy and effectiveness of xDeepInt over state-of-the-art models. We open-source the TensorFlow implementation of xDeepInt: https://github.com/yanyachen/xDeepInt.
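The recursive vector-wise interaction at the heart of the abstract can be sketched with Hadamard (element-wise) products on embedding vectors. This is a simplified, unweighted form of a polynomial-interaction recursion, not the paper's exact parameterized PIN layer:

```python
def hadamard(u, v):
    """Element-wise (vector-wise) product of two embedding vectors."""
    return [a * b for a, b in zip(u, v)]

def pin_layer(x_prev, x0, residual=True):
    """One recursive interaction layer: multiplying the previous-order
    interaction by the raw embedding x0 raises the polynomial degree by
    one; the residual keeps all lower-order terms in the mixture."""
    inter = hadamard(x_prev, x0)
    return [p + i for p, i in zip(x_prev, inter)] if residual else inter

x0 = [2.0, 3.0]
x1 = pin_layer(x0, x0)   # mixes 1st- and 2nd-order terms
x2 = pin_layer(x1, x0)   # mixes terms up to 3rd order
```

Stacking L such layers bounds the interaction order at L + 1, which is the "bounded order" property the abstract refers to; the bit-wise side and the subspace-crossing mechanism are omitted here.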
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, resulting in predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
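The second-stage objective rests on a photometric error between the observed frame and a frame reconstructed from the predicted ego-motion and depth. A simplified L1-only version on 2-D lists of intensities (common implementations combine L1 with SSIM; the warping step that produces `pred_frame` is omitted here):

```python
def photometric_error(pred_frame, target_frame):
    """Mean absolute intensity difference between a reconstructed
    (warped) frame and the actually observed frame."""
    total, count = 0.0, 0
    for pred_row, tgt_row in zip(pred_frame, target_frame):
        for p, t in zip(pred_row, tgt_row):
            total += abs(p - t)
            count += 1
    return total / count

pred = [[0.5, 0.5], [0.5, 0.5]]      # toy reconstructed frame
target = [[0.5, 0.7], [0.3, 0.5]]    # toy observed frame
err = photometric_error(pred, target)
```

Minimizing this error over large unlabeled video collections is what lets the visual encoder learn driving-relevant representations without any manual labels.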
Interview has been regarded as one of the most crucial steps for recruitment. To fully prepare for the interview with the recruiters, job seekers usually practice with mock interviews between each other. However, such a mock interview with peers is generally far away from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to have online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from the online interview data and provides mock interview services to the job seekers. The task is challenging in two ways: (1) the interview data are now available but still of low-resource; (2) to generate meaningful and relevant interview dialogs requires thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and dialog generator so that most parameters can be trained with ungrounded dialogs as well as the resume data that are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results to generate mock interviews. With the help of EZInterviewer, we hope to make mock interview practice become easier for job seekers.